Mistrust is a major barrier to implementing deep learning in healthcare settings. Trust could be earned by conveying model certainty, i.e., the probability that a given model output is accurate, but the use of uncertainty estimation to build trust in deep learning is largely unexplored, and there is no consensus regarding optimal methods for quantifying uncertainty. Our purpose is to critically evaluate methods for quantifying uncertainty in deep learning for healthcare applications and to propose a conceptual framework for specifying the certainty of deep learning predictions. We searched the Embase, MEDLINE, and PubMed databases for articles relevant to the study objectives in compliance with PRISMA guidelines, rated study quality using validated tools, and extracted data according to modified CHARMS criteria. Among 30 included studies, 24 described medical imaging applications. All imaging model architectures used convolutional neural networks or a variation thereof. The predominant method for quantifying uncertainty was Monte Carlo dropout, which produces predictions from multiple networks in which different neurons have been dropped out and measures the variance across the resulting distribution of predictions. Conformal prediction offered similarly strong performance in estimating uncertainty, along with ease of interpretation and applicability not only to deep learning but also to other machine learning approaches. Among the six articles describing non-imaging applications, model architectures and uncertainty estimation methods were heterogeneous, but predictive performance was generally strong, and uncertainty estimation was effective for comparing modeling methods. Overall, the use of model learning curves to quantify epistemic uncertainty (uncertainty attributable to model parameters) was sparse. Heterogeneity in reporting methods precluded a meta-analysis. Uncertainty estimation methods have the potential to identify rare but important misclassifications made by deep learning models and to compare modeling methods, which could build patient and clinician trust in deep learning applications in healthcare. Efficient maturation of this field will require standardized guidelines for reporting performance and uncertainty metrics.
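To illustrate the two leading approaches the review identifies, here is a minimal sketch of Monte Carlo dropout, assuming a PyTorch classifier whose architecture already contains dropout layers; the model, input, and sample count are hypothetical placeholders, not drawn from the reviewed studies.

```python
# Minimal sketch of Monte Carlo dropout uncertainty estimation (PyTorch).
# The model, input tensor, and n_samples are illustrative assumptions.
import torch


def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout active and
    return the mean class probabilities and their variance across passes."""
    model.eval()
    # Re-enable dropout layers only, leaving batch norm etc. in eval mode.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    with torch.no_grad():
        preds = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = preds.mean(dim=0)      # averaged prediction across dropout masks
    variance = preds.var(dim=0)   # per-class predictive variance (uncertainty)
    return mean, variance
```

Conformal prediction's appeal, as noted above, is that it wraps essentially any classifier rather than requiring a specific deep learning architecture. The split-conformal sketch below assumes a scikit-learn-style predict_proba interface, a held-out calibration set, and an illustrative alpha of 0.1; all names are hypothetical.

```python
# Hedged sketch of split conformal prediction for any probabilistic
# classifier; variable names and alpha are illustrative assumptions.
import numpy as np


def conformal_prediction_sets(model, X_cal, y_cal, X_test, alpha=0.1):
    """Build prediction sets with roughly (1 - alpha) marginal coverage."""
    # Nonconformity score: 1 minus the probability assigned to the true class.
    cal_probs = model.predict_proba(X_cal)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]
    # Finite-sample-corrected quantile of the calibration scores.
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)
    # A test point's set contains every class whose score falls below q;
    # larger sets directly flag higher-uncertainty predictions.
    test_probs = model.predict_proba(X_test)
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

The set size gives the easy interpretability the review highlights: a singleton set is a confident prediction, while a multi-class set signals a case that may warrant clinician review.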
The complexity of transplant medicine pushes the boundaries of innate human reasoning. From networks of immune modulators to dynamic pharmacokinetics, variable postoperative graft survival, and the equitable allocation of scarce organs, machine learning promises to inform clinical decision making by deciphering prodigious amounts of available data. This paper reviews current research describing how algorithms have the potential to augment clinical practice in solid organ transplantation. We provide a general introduction to different machine learning techniques, describing their strengths, limitations, and barriers to clinical implementation. We summarize emerging evidence of recent advances that allow machine learning algorithms to predict acute post-surgical and long-term outcomes, classify biopsy and radiographic data, augment pharmacologic decision making, and accurately represent the complexity of the host immune response. Yet many of these applications exist in pre-clinical form only, supported primarily by evidence from single-center, retrospective studies. Prospective investigation of these technologies could unlock the potential of machine learning to augment clinical care and health care delivery systems in solid organ transplantation.
